18 research outputs found
FP-PET: Large Model, Multiple Loss And Focused Practice
This study presents FP-PET, a comprehensive approach to medical image
segmentation with a focus on CT and PET images. Utilizing a dataset from the
AutoPET 2023 Challenge, the research employs a variety of machine learning
models, including STUNet-large, SwinUNETR, and VNet, to achieve
state-of-the-art segmentation performance. The paper introduces an aggregated
score that combines multiple evaluation metrics such as Dice score, false
positive volume (FPV), and false negative volume (FNV) to provide a holistic
measure of model effectiveness. The study also discusses the computational
challenges and solutions related to model training, which was conducted on
high-performance GPUs. Preprocessing and postprocessing techniques, including
Gaussian weighting schemes and morphological operations, are explored to
further refine the segmentation output. The research offers valuable insights
into the challenges and solutions for advanced medical image segmentation.
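The abstract names Dice score, false positive volume (FPV), and false negative volume (FNV) as components of the aggregated score, but does not give the aggregation formula. As a minimal sketch, the three component metrics for binary masks might be computed as follows (the `segmentation_metrics` helper and the per-voxel volume parameter are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, target, voxel_volume_ml=1.0):
    """Compute Dice score, false positive volume (FPV), and
    false negative volume (FNV) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    dice = 2.0 * intersection / denom if denom > 0 else 1.0
    # Volumes of mispredicted voxels, scaled by the physical voxel size
    fpv = np.logical_and(pred, ~target).sum() * voxel_volume_ml
    fnv = np.logical_and(~pred, target).sum() * voxel_volume_ml
    return dice, fpv, fnv

# Hypothetical 2x2x2 volumes: one true positive, one FP, one FN voxel
pred = np.array([[[1, 0], [0, 0]], [[1, 0], [0, 0]]])
target = np.array([[[1, 0], [0, 0]], [[0, 1], [0, 0]]])
dice, fpv, fnv = segmentation_metrics(pred, target)
```

How these three numbers are weighted into the paper's single aggregated score is a design choice the abstract leaves open.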
Subsecond total-body imaging using ultrasensitive positron emission tomography.
A 194-cm-long total-body positron emission tomography/computed tomography (PET/CT) scanner (uEXPLORER) has been constructed to offer a transformative platform for human radiotracer imaging in clinical research and healthcare. Its total-body coverage and exceptional sensitivity provide opportunities for innovative studies of physiology, biochemistry, and pharmacology. The objective of this study is to develop a method to perform ultrahigh (100 ms) temporal resolution dynamic PET imaging by combining advanced dynamic image reconstruction paradigms with the uEXPLORER scanner. We aim to capture the fast dynamics of initial radiotracer distribution, as well as cardiac motion, in the human body. The results show that we can visualize radiotracer transport in the body on timescales of 100 ms and obtain motion-frozen images with superior image quality compared to conventional methods. The proposed method has applications in studying fast tracer dynamics, such as blood flow and the dynamic response to neural modulation, as well as performing real-time motion tracking (e.g., cardiac and respiratory motion, and gross body motion) without any external monitoring device (e.g., electrocardiogram, breathing belt, or optical trackers).
Ultrasonic Image's Annotation Removal: A Self-supervised Noise2Noise Approach
Accurately annotated ultrasonic images are vital components of a high-quality
medical report. Hospitals often have strict guidelines on the types of
annotations that should appear on imaging results. However, manually inspecting
these images can be a cumbersome task. While a neural network could potentially
automate the process, training such a model typically requires a dataset of
paired input and target images, which in turn involves significant human
labour. This study introduces an automated approach for detecting annotations
in images. This is achieved by treating the annotations as noise, creating a
self-supervised pretext task and using a model trained under the Noise2Noise
scheme to restore the image to a clean state. We tested a variety of model
structures on the denoising task against different types of annotation,
including body marker annotation, radial line annotation, etc. Our results
demonstrate that most models trained under the Noise2Noise scheme outperformed
their counterparts trained with noisy-clean data pairs. The customized U-Net
yielded the best outcome on the body marker annotation dataset, with high
scores on segmentation precision and reconstruction similarity. We released
our code at https://github.com/GrandArth/UltrasonicImage-N2N-Approach.
Comment: 10 pages, 7 figures
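The Noise2Noise pretext described above trains on pairs of independently corrupted views of the same image, so no clean targets are ever needed. A minimal sketch of how such training pairs might be constructed (the `add_annotation_noise` corruption model, with bright line segments standing in for annotation marks, is illustrative, not the paper's):

```python
import numpy as np

def add_annotation_noise(image, rng, n_marks=3):
    """Overlay random bright line segments that stand in for
    annotation marks (the 'noise' in the Noise2Noise pretext)."""
    noisy = image.copy()
    h, w = image.shape
    for _ in range(n_marks):
        r = rng.integers(0, h)                       # row of the mark
        c0, c1 = sorted(rng.integers(0, w, size=2))  # column span
        noisy[r, c0:c1 + 1] = 1.0                    # saturated annotation pixels
    return noisy

def make_n2n_pair(image, rng):
    """Two independently corrupted views of the same image: the model
    learns to map one to the other, and with zero-mean-consistent noise
    it converges toward the clean image without clean supervision."""
    return add_annotation_noise(image, rng), add_annotation_noise(image, rng)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
x, y = make_n2n_pair(clean, rng)  # input/target pair for training
```

The denoising network itself (e.g., the U-Net variants the study compares) would then be trained to predict `y` from `x` with an ordinary regression loss.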
Deep Learning in Single-Cell Analysis
Single-cell technologies are revolutionizing the entire field of biology. The
large volumes of data generated by single-cell technologies are
high-dimensional, sparse, heterogeneous, and have complicated dependency
structures, making analyses using conventional machine learning approaches
challenging and impractical. In tackling these challenges, deep learning often
demonstrates superior performance compared to traditional machine learning
methods. In this work, we give a comprehensive survey on deep learning in
single-cell analysis. We first introduce background on single-cell technologies
and their development, as well as fundamental concepts of deep learning
including the most popular deep architectures. We present an overview of the
single-cell analytic pipeline pursued in research applications while noting
divergences due to data sources or specific applications. We then review seven
popular tasks spanning different stages of the single-cell analysis
pipeline, including multimodal integration, imputation, clustering, spatial
domain identification, cell-type deconvolution, cell segmentation, and
cell-type annotation. Under each task, we describe the most recent developments
in classical and deep learning methods and discuss their advantages and
disadvantages. Deep learning tools and benchmark datasets are also summarized
for each task. Finally, we discuss the future directions and the most recent
challenges. This survey will serve as a reference for biologists and computer
scientists, encouraging collaborations.
Comment: 77 pages, 11 figures, 15 tables; keywords: deep learning, single-cell analysis
Evaluation of a Wobbling Method Applied to Correcting Defective Pixels of CZT Detectors in SPECT Imaging
In this paper, we propose a wobbling method to correct bad pixels in cadmium zinc telluride (CZT) detectors, using information from related images. We build an automated device that realizes the wobbling correction for small-animal Single Photon Emission Computed Tomography (SPECT) imaging. The wobbling correction method is applied to various constellations of defective pixels. The corrected images are compared with the results of a conventional interpolation method, and the correction effectiveness is evaluated quantitatively using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In summary, the proposed wobbling method, equipped with the automatic mechanical system, provides better image quality for correcting defective pixels, and could be used for all pixelated detectors in molecular imaging.
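Of the two quality factors used above, PSNR has a simple closed form (SSIM is more involved and is typically taken from a library such as scikit-image). A minimal PSNR sketch, with a hypothetical single uncorrected defective pixel as the example:

```python
import numpy as np

def psnr(reference, corrected, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a corrected image: 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((reference - corrected) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.ones((8, 8))
bad = ref.copy()
bad[0, 0] = 0.0  # one hypothetical defective pixel left uncorrected
# MSE = 1/64, so PSNR = 10 * log10(64) dB
```

Higher PSNR after correction indicates the restored image is closer to the reference, which is how the wobbling and interpolation methods are compared quantitatively.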
Anatomically aided PET image reconstruction using deep neural networks
Purpose: The developments of PET/CT and PET/MR scanners provide opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate complementary features into an iterative reconstruction framework to improve PET image reconstruction.
Methods: We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for integrating anatomical information from CT images. One was a multichannel CNN, which treated PET and CT volumes as separate channels of the input. The other was a multibranch CNN, which implemented separate encoders for PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method was compared with existing methods, including maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction, and a CNN-based deep penalty method with and without anatomical guidance.
Results: Reconstructed images showed that the proposed constrained ML reconstruction approach produced higher-quality images than the competing methods. The tumors in the lung region have higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. The image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than in all the competing methods at a matched lesion contrast.
Conclusions: The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
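MLEM, the baseline reconstruction named above, has a well-known closed-form multiplicative update. A minimal sketch on a toy system matrix (the 3-bin, 2-voxel system and noiseless data are illustrative, not from the paper):

```python
import numpy as np

def mlem(A, y, n_iters=100):
    """Maximum likelihood expectation maximization for emission
    tomography: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])           # uniform nonnegative initial image
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                                 # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)    # measured / estimated
        x = x / sens * (A.T @ ratio)                 # multiplicative update
    return x

# Toy system: 3 detector bins observing 2 voxels, noiseless data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y)
```

The constrained maximum likelihood approach in the paper keeps this likelihood model but regularizes the image estimate with the co-learning CNN rather than leaving it unconstrained as MLEM does.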